50 research outputs found
Unsupervised Anomaly Localization with Structural Feature-Autoencoders
Unsupervised Anomaly Detection has become a popular method to detect
pathologies in medical images as it does not require supervision or labels for
training. Most commonly, the anomaly detection model generates a "normal"
version of an input image, and the pixel-wise difference of the two is
used to localize anomalies. However, large residuals often occur due to
imperfect reconstruction of the complex anatomical structures present in most
medical images. This method also fails to detect anomalies that are not
characterized by large intensity differences from the surrounding tissue. We
propose to tackle this problem using a feature-mapping function that transforms
the input intensity images into a space with multiple channels where anomalies
can be detected along different discriminative feature maps extracted from the
original image. We then train an Autoencoder model in this space using a
structural similarity loss that considers not only differences in intensity
but also differences in contrast and structure. Our method significantly increases
performance on two medical data sets for brain MRI. Code and experiments are
available at https://github.com/FeliMe/feature-autoencoder
Comment: 10 pages, 5 figures, one table; accepted to the MICCAI 2021 BrainLes Workshop
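The structural similarity loss described above can be illustrated with a minimal sketch. This is a single-scale SSIM computed from global per-channel statistics; the paper's loss operates on multi-channel feature maps (and SSIM is usually windowed), so treat this as an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale structural similarity between two same-size images,
    from global statistics: compares luminance (means), contrast
    (variances), and structure (covariance)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def ssim_loss(recon, target):
    """1 - mean SSIM over channels: unlike a plain pixel-wise residual,
    this penalizes mismatches in contrast and structure as well."""
    return 1.0 - float(np.mean([ssim(recon[c], target[c])
                                for c in range(recon.shape[0])]))
```

A perfect reconstruction gives a loss of 0, while an image with inverted contrast is penalized even where mean intensities agree.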
Topologically faithful image segmentation via induced matching of persistence barcodes
Image segmentation is a widely researched field in which neural networks find
broad application across many areas of technology. Some of the most popular
approaches to train segmentation networks employ loss functions optimizing
pixel-overlap, an objective that is insufficient for many segmentation tasks.
In recent years, these limitations have fueled a growing interest in topology-aware
methods, which aim to recover the correct topology of the segmented structures.
However, so far, none of the existing approaches achieve a spatially correct
matching between the topological features of ground truth and prediction.
In this work, we propose the first topologically and feature-wise accurate
metric and loss function for supervised image segmentation, which we term Betti
matching. We show how induced matchings guarantee the spatially correct
matching between barcodes in a segmentation setting. Furthermore, we propose an
efficient algorithm to compute the Betti matching of images. We show that the
Betti matching error is an interpretable metric to evaluate the topological
correctness of segmentations, which is more sensitive than the well-established
Betti number error. Moreover, the differentiability of the Betti matching loss
enables its use as a loss function. It improves the topological performance of
segmentation networks across six diverse datasets while preserving the
volumetric performance. Our code is available at
https://github.com/nstucki/Betti-matching
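For context, the well-established Betti number error that Betti matching is shown to improve upon can be sketched for 2D binary masks. This illustrative baseline counts connected components (b0) and holes (b1) and, as the abstract notes, cannot tell whether matched topological features are spatially aligned; Betti matching itself requires persistent homology with induced matchings (see the linked repository).

```python
import numpy as np

def _num_components(cells, neighbors):
    """Count connected components of a set of (row, col) cells by flood fill."""
    cells = set(cells)
    n = 0
    while cells:
        n += 1
        stack = [cells.pop()]
        while stack:
            r, c = stack.pop()
            for dr, dc in neighbors:
                nb = (r + dr, c + dc)
                if nb in cells:
                    cells.remove(nb)
                    stack.append(nb)
    return n

N8 = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def betti_numbers_2d(mask):
    """b0 = foreground components (8-connectivity); b1 = holes, i.e.
    bounded background components (4-connectivity, the dual convention)."""
    mask = np.asarray(mask, dtype=bool)
    b0 = _num_components(zip(*np.nonzero(mask)), N8)
    padded = np.pad(~mask, 1, constant_values=True)
    # Subtract 1 for the single unbounded outer background component.
    b1 = _num_components(zip(*np.nonzero(padded)), N4) - 1
    return (b0, b1)

def betti_number_error(pred, gt):
    """Summed absolute difference of Betti numbers; blind to whether
    the counted features actually overlap spatially."""
    return sum(abs(p - g) for p, g in zip(betti_numbers_2d(pred), betti_numbers_2d(gt)))
```

A ring-shaped mask has Betti numbers (1, 1), a filled square (1, 0), so their Betti number error is 1 regardless of where the ring sits in the image, which is exactly the insensitivity the matching-based metric addresses.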
Link Prediction for Flow-Driven Spatial Networks
Link prediction algorithms aim to infer the existence of connections (or
links) between nodes in network-structured data and are typically applied to
refine the connectivity among nodes. In this work, we focus on link prediction
for flow-driven spatial networks, which are embedded in a Euclidean space and
relate to physical exchange and transportation processes (e.g., blood flow in
vessels or traffic flow in road networks). To this end, we propose the Graph
Attentive Vectors (GAV) link prediction framework. GAV models simplified
dynamics of physical flow in spatial networks via an attentive,
neighborhood-aware message-passing paradigm, updating vector embeddings in a
constrained manner. We evaluate GAV on eight flow-driven spatial networks given
by whole-brain vessel graphs and road networks. GAV demonstrates superior
performance across all datasets and metrics and outperformed the
state-of-the-art on the ogbl-vessel benchmark at the time of submission by 12%
(98.38 vs. 87.98 AUC). All code is publicly available on GitHub.
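At a high level, an attentive, neighborhood-aware message-passing step over node vector embeddings can be sketched as below. This is a hypothetical toy version: GAV's actual update is learned, models simplified flow dynamics, and constrains the embedding update in ways this sketch does not; the function name, the dot-product attention, and the mixing coefficient are all my own illustrative choices.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_update(h, edges, alpha=0.5):
    """One attention-weighted message-passing step.

    h:     (n_nodes, d) array of vector embeddings.
    edges: list of undirected (u, v) index pairs.
    Each node mixes its own embedding with an attention-weighted
    average of its neighbors' embeddings."""
    n, _ = h.shape
    neigh = {i: [] for i in range(n)}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    out = h.copy()
    for i in range(n):
        if not neigh[i]:
            continue  # isolated nodes keep their embedding
        nb = np.stack([h[j] for j in neigh[i]])
        att = softmax(nb @ h[i])   # attention scores over the neighborhood
        msg = att @ nb             # aggregated neighbor message
        out[i] = (1 - alpha) * h[i] + alpha * msg
    return out
```

For link prediction, a score for a candidate edge could then be derived from the similarity of the two updated endpoint embeddings.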
Surface Normal Estimation with Transformers
We propose the use of a Transformer to accurately predict normals from point
clouds with noise and density variations. Previous learning-based methods
utilize PointNet variants to explicitly extract multi-scale features at
different input scales, then focus on a surface fitting method by which local
point cloud neighborhoods are fitted to a geometric surface approximated by
either a polynomial function or a multi-layer perceptron (MLP). However,
fitting surfaces to fixed-order polynomial functions can suffer from
overfitting or underfitting, and learning MLP-represented hyper-surfaces
requires pre-generated per-point weights. To avoid these limitations, we first
unify the design choices in previous works and then propose a simplified
Transformer-based model to extract richer and more robust geometric features
for the surface normal estimation task. Through extensive experiments, we
demonstrate that our Transformer-based method achieves state-of-the-art
performance on both the synthetic shape dataset PCPNet, and the real-world
indoor scene dataset SceneNN, exhibiting more noise-resilient behavior and
significantly faster inference. Most importantly, we demonstrate that the
sophisticated hand-designed modules in existing works are not necessary to
excel at the task of surface normal estimation.
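As a point of reference, the classical non-learned baseline for this task, fitting a plane to a local neighborhood via PCA/SVD, is a few lines; the paper's Transformer replaces such fixed fitting schemes (and the polynomial/MLP surface fits it critiques) with learned geometric features.

```python
import numpy as np

def pca_normal(neighborhood):
    """Estimate a surface normal for a (k, 3) local point-cloud
    neighborhood by plane fitting: the unit normal is the right
    singular vector with the smallest singular value of the centered
    points. The sign is ambiguous without orientation information."""
    pts = np.asarray(neighborhood, dtype=float)
    pts = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)
```

This baseline degrades under the noise and density variations the abstract highlights, since a fixed plane (or fixed-order polynomial) under- or over-fits the local geometry.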
Physiology-based simulation of the retinal vasculature enables annotation-free segmentation of OCT angiographs
Optical coherence tomography angiography (OCTA) can non-invasively image the eye's circulatory system. In order to reliably characterize the retinal vasculature, there is a need to automatically extract quantitative metrics from these images. The calculation of such biomarkers requires a precise semantic segmentation of the blood vessels. However, deep-learning-based methods for segmentation mostly rely on supervised training with voxel-level annotations, which are costly to obtain. In this work, we present a pipeline to synthesize large amounts of realistic OCTA images with intrinsically matching ground truth labels, thereby obviating the need for manual annotation of training data. Our proposed method is based on two novel components: 1) a physiology-based simulation that models the various retinal vascular plexuses and 2) a suite of physics-based image augmentations that emulate the OCTA image acquisition process including typical artifacts. In extensive benchmarking experiments, we demonstrate the utility of our synthetic data by successfully training retinal vessel segmentation algorithms. Encouraged by our method's competitive quantitative and superior qualitative performance, we believe that it constitutes a versatile tool to advance the quantitative analysis of OCTA images.
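A toy example of what a physics-inspired augmentation could look like, emulating two common acquisition effects (multiplicative speckle noise and radial intensity falloff), is sketched below. The paper's actual augmentation suite models the OCTA acquisition process in far more detail; the function name and every parameter here are arbitrary illustrations.

```python
import numpy as np

def octa_style_augment(img, rng=None):
    """Apply two toy acquisition-style corruptions to a 2D image in [0, 1]:
    gamma-distributed multiplicative speckle (mean ~1) and a smooth
    radial vignetting falloff toward the image corners."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    speckle = rng.gamma(shape=4.0, scale=0.25, size=(h, w))
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)
    vignette = 1.0 - 0.3 * r2
    return np.clip(img * speckle * vignette, 0.0, 1.0)
```

Applying such corruptions to the clean simulated images narrows the domain gap between synthetic training data and real scans.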
A Deep Learning Approach to Predicting Collateral Flow in Stroke Patients Using Radiomic Features from Perfusion Images
Collateral circulation results from specialized anastomotic channels which are capable of providing oxygenated blood to regions with compromised blood flow caused by ischemic injuries. The quality of collateral circulation has been established as a key factor in determining the likelihood of a favorable clinical outcome and goes a long way to determine the choice of stroke care model - that is the decision to transport or treat eligible patients immediately.
Though there exist several imaging methods and grading criteria for quantifying collateral blood flow, the actual grading is mostly done through manual inspection of the acquired images. This approach is associated with a number of challenges. First, it is time-consuming - the clinician needs to scan through several slices of images to ascertain the region of interest before deciding on what severity grade to assign to a patient. Second, there is a high tendency for bias and inconsistency in the final grade assigned to a patient depending on the experience level of the clinician.
We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data. First, we formulate a region of interest detection task as a reinforcement learning problem and train a deep learning network to automatically detect the occluded region within the 3D MR perfusion volumes. Second, we extract radiomic features from the obtained region of interest through local image descriptors and denoising auto-encoders. Finally, we apply a convolutional neural network and other machine learning classifiers to the extracted radiomic features to automatically predict the collateral flow grading of the given patient volume as one of three severity classes - no flow (0), moderate flow (1), and good flow (2).
A deep learning approach to predict collateral flow in stroke patients using radiomic features from perfusion images.
Collateral circulation results from specialized anastomotic channels which are capable of providing oxygenated blood to regions with compromised blood flow caused by arterial obstruction. The quality of collateral circulation has been established as a key factor in determining the likelihood of a favorable clinical outcome and goes a long way to determining the choice of a stroke care model. Though many imaging and grading methods exist for quantifying collateral blood flow, the actual grading is mostly done through manual inspection. This approach is associated with a number of challenges. First, it is time-consuming. Second, there is a high tendency for bias and inconsistency in the final grade assigned to a patient depending on the experience level of the clinician. We present a multi-stage deep learning approach to predict collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data. First, we formulate a region of interest detection task as a reinforcement learning problem and train a deep learning network to automatically detect the occluded region within the 3D MR perfusion volumes. Second, we extract radiomic features from the obtained region of interest through local image descriptors and denoising auto-encoders. Finally, we apply a convolutional neural network and other machine learning classifiers to the extracted radiomic features to automatically predict the collateral flow grading of the given patient volume as one of three severity classes - no flow (0), moderate flow (1), and good flow (2). Results from our experiments show an overall accuracy of 72% in the three-class prediction task. With an inter-observer agreement of 16% and a maximum intra-observer agreement of 74% in a similar experiment, our automated deep learning approach demonstrates a performance comparable to expert grading, is faster than visual inspection, and eliminates the problem of grading bias.
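To make the final pipeline stages concrete, a toy stand-in is sketched below: first-order intensity statistics as "radiomic" features and a nearest-centroid rule as the classifier. The paper itself uses local image descriptors plus denoising auto-encoders for features and a CNN among other classifiers; every name, feature, and parameter choice here is illustrative only.

```python
import numpy as np

GRADES = {0: "no flow", 1: "moderate flow", 2: "good flow"}

def radiomic_features(roi):
    """Toy radiomic descriptor for a (3D) region of interest:
    first-order intensity statistics only."""
    roi = np.asarray(roi, dtype=float).ravel()
    return np.array([roi.mean(), roi.std(), roi.min(), roi.max(),
                     np.percentile(roi, 25), np.percentile(roi, 75)])

def predict_grade(roi, centroids):
    """Assign one of the three severity grades (0/1/2) by nearest
    feature centroid; `centroids` has one row per grade."""
    feats = radiomic_features(roi)
    dists = np.linalg.norm(centroids - feats, axis=1)
    return int(np.argmin(dists))
```

In the real pipeline the region of interest fed into the feature extractor is first localized automatically by the reinforcement-learning detector.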